5 research outputs found

    A Combined Frequency Scaling and Application Elasticity Approach for Energy-Efficient Virtualized Data Centers

    No full text
    At present, large-scale data centers are typically over-provisioned in order to handle peak load requirements. The resulting low utilization of resources contributes to huge amounts of power consumption in data centers. The effects of high power consumption manifest as high operational costs for data centers and a large carbon footprint for the environment. Therefore, management solutions for large-scale data centers must be designed to take power consumption into account effectively. In this work, we combine three management techniques that can be used to control systems in an energy-efficient manner: changing the number of virtual machines, changing the number of cores, and scaling the CPU frequencies. The proposed system consists of a controller that combines feedback and feedforward information to determine a configuration that minimizes power consumption while meeting the performance target. The controller can also be configured to accomplish power minimization in a stable manner, without causing large oscillations in the resource allocations. Our experimental evaluation, based on the Sysbench benchmark combined with workload traces from production systems, shows that our approach achieves the lowest energy consumption among the three compared approaches while meeting the performance target.
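
    The sketch below illustrates, under assumed and deliberately simplified power and performance models, how a combined feedback/feedforward controller could search for the number of VMs, cores, and a CPU frequency that minimizes estimated power while meeting a response-time target. The model functions, parameter ranges, and the choose_configuration routine are illustrative assumptions, not the paper's actual controller.

    from itertools import product

    def estimated_power(vms, cores, freq_ghz):
        # Hypothetical power model: an idle cost plus a term that grows with
        # active cores and roughly cubically with CPU frequency.
        return 50.0 + vms * cores * (2.0 + 1.5 * freq_ghz ** 3)

    def estimated_response_time(vms, cores, freq_ghz, arrival_rate):
        # Hypothetical performance model: capacity proportional to total
        # core-GHz; response time grows sharply as utilization approaches 1.
        capacity = vms * cores * freq_ghz * 100.0  # requests/s this config can serve
        utilization = min(arrival_rate / capacity, 0.999)
        return 1.0 / (capacity * (1.0 - utilization))

    def choose_configuration(arrival_rate_forecast, measured_error, target_rt=0.05):
        # Feedforward: start from the workload forecast.
        # Feedback: adjust it by the recently measured tracking error.
        adjusted_rate = arrival_rate_forecast * (1.0 + measured_error)
        best = None
        for vms, cores, freq in product(range(1, 9), range(1, 5), (1.2, 1.8, 2.4, 3.0)):
            rt = estimated_response_time(vms, cores, freq, adjusted_rate)
            if rt > target_rt:
                continue  # this configuration misses the performance target
            power = estimated_power(vms, cores, freq)
            if best is None or power < best[0]:
                best = (power, (vms, cores, freq))
        return best  # (estimated power, configuration) or None if infeasible

    if __name__ == "__main__":
        print(choose_configuration(arrival_rate_forecast=800.0, measured_error=0.1))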

    Energy-efficient cloud computing : autonomic resource provisioning for datacenters

    No full text
    Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focuses on the development of resource allocation methods to improve energy efficiency in data centers. The thesis employs three approaches to jointly optimize power and performance: scaling virtual machine (VM) and server processing capabilities to reduce energy consumption; improving resource usage through workload consolidation; and exploiting resource heterogeneity. To achieve these goals, the first part of the thesis proposes models, algorithms, and techniques that reduce energy usage through the use of VM scaling, VM sizing for CPU and memory, CPU frequency adaptation, as well as hardware power capping for server-level resource allocation. The proposed online performance and power models capture system behavior while adapting to changes in the underlying infrastructure. Based on these models, the thesis proposes controllers that dynamically determine power-efficient resource allocations while minimizing performance penalty. These methods are then extended to support resource overbooking and workload consolidation to improve resource utilization and energy efficiency across the cluster or data center. In order to cater for different performance requirements among collocated applications, such as latency-sensitive services and batch jobs, the controllers apply service differentiation among prioritized VMs and performance isolation techniques, including CPU pinning, quota enforcement, and online resource tuning. The thesis also considers resource heterogeneity and proposes heterogeneity-aware scheduling techniques that improve energy efficiency by integrating hardware accelerators (in this case FPGAs) and exploiting differences in the energy footprints of different servers. In addition, the thesis provides a comprehensive study of the overheads associated with a number of virtualization platforms in order to understand the trade-offs provided by the latest technological advances and to make the best resource allocation decisions accordingly. The proposed methods in this thesis are evaluated by implementing prototypes on real testbeds and conducting experiments using real workload data taken from production systems as well as synthetic workload data that we generated. Our evaluation results demonstrate that the proposed approaches provide improved energy management of resources in virtualized data centers.
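
    As a minimal sketch of the service-differentiation idea described in the abstract, the following function splits a host's CPU quota between collocated VMs so that latency-sensitive VMs keep a guaranteed share under overbooking while batch VMs share whatever capacity remains. The VM names, demand figures, and the allocation rule are assumptions for illustration, not the thesis's actual controllers.

    def allocate_cpu_quota(vms, host_capacity_pct):
        # vms: list of dicts with 'name', 'demand_pct', and 'class'
        # ('latency' or 'batch'); returns per-VM CPU quota in percent.
        latency = [v for v in vms if v["class"] == "latency"]
        batch = [v for v in vms if v["class"] == "batch"]

        quotas = {}
        remaining = host_capacity_pct
        # Latency-sensitive VMs are served first, up to their demand.
        for v in latency:
            grant = min(v["demand_pct"], remaining)
            quotas[v["name"]] = grant
            remaining -= grant

        # Batch VMs share whatever is left, proportionally to their demand.
        total_batch_demand = sum(v["demand_pct"] for v in batch) or 1.0
        for v in batch:
            quotas[v["name"]] = remaining * v["demand_pct"] / total_batch_demand
        return quotas

    if __name__ == "__main__":
        collocated = [
            {"name": "web-frontend", "demand_pct": 120, "class": "latency"},
            {"name": "analytics-job", "demand_pct": 200, "class": "batch"},
            {"name": "batch-encoder", "demand_pct": 100, "class": "batch"},
        ]
        # Overbooked host: 400% capacity (4 cores) against 420% total demand.
        print(allocate_cpu_quota(collocated, host_capacity_pct=400))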

    FPGA-Aware Scheduling Strategies at Hypervisor Level in Cloud Environments

    No full text
    Current open issues in cloud computing include supporting nontrivial Quality of Service-related Service Level Objectives (SLOs) and reducing the energy footprint of data centers. One strategy that can contribute to both is the integration of accelerators as specialized resources within the cloud system. In particular, Field Programmable Gate Arrays (FPGAs) exhibit an excellent performance/energy-consumption ratio that can be harnessed to achieve these goals. In this paper, a multilevel cloud scheduling framework is described, and several FPGA-aware node-level scheduling strategies (applied at the hypervisor level) are explored and analyzed. These strategies are based on the use of a multiobjective metric aimed at providing Quality of Service (QoS) support. Results show that the proposed FPGA-aware scheduling policies increase the number of user requests serviced with their SLOs fulfilled while minimizing energy consumption. In particular, evaluation results for a use case based on a multimedia application show that the proposal can save more than 20% of the total energy compared with baseline algorithms while fulfilling a higher percentage of Service Level Agreements (SLAs).
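
    The following sketch illustrates, in the spirit of the policies the paper explores, how a node-level scheduler might use a multiobjective score to decide whether a request should be dispatched to a CPU core or an FPGA slot, trading off SLO fulfillment against estimated energy. The score weights, device estimates, and field names are illustrative assumptions, not the paper's actual metric.

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        exec_time_s: float   # estimated time to serve the request on this device
        energy_j: float      # estimated energy to serve the request
        busy_until: float    # time at which the device becomes free

    def score(device, request_deadline_s, now=0.0, w_slo=0.7, w_energy=0.3):
        # Multiobjective score: reward meeting the deadline (SLO), favor low energy.
        finish = max(now, device.busy_until) + device.exec_time_s
        slo_ok = 1.0 if finish <= request_deadline_s else 0.0
        energy_term = 1.0 / (1.0 + device.energy_j)  # lower energy -> higher term
        return w_slo * slo_ok + w_energy * energy_term

    def place_request(devices, request_deadline_s):
        # Pick the device (CPU core or FPGA slot) with the highest score.
        return max(devices, key=lambda d: score(d, request_deadline_s))

    if __name__ == "__main__":
        devices = [
            Device("cpu-core-0", exec_time_s=0.8, energy_j=12.0, busy_until=0.0),
            Device("fpga-slot-0", exec_time_s=0.2, energy_j=3.0, busy_until=0.1),
        ]
        chosen = place_request(devices, request_deadline_s=0.5)
        print("dispatch to:", chosen.name)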